Virtual and Augmented Reality Techniques for Minimally Invasive Cardiac Interventions: Concept, Design, Evaluation and Pre-clinical Implementation
While less invasive techniques have been employed for some procedures, most intracardiac interventions are still performed under cardiopulmonary bypass, on the drained, arrested heart. The progress toward off-pump intracardiac interventions has been hampered by the lack of adequate visualization inside the beating heart.
This thesis describes the development, assessment, and pre-clinical implementation of a mixed reality environment that integrates pre-operative imaging and modeling with surgical tracking technologies and real-time ultrasound imaging. The intra-operative echo images are augmented with pre-operative representations of the cardiac anatomy and virtual models of the delivery instruments, tracked in real time using magnetic tracking technology. As a result, the otherwise context-less ultrasound images can be interpreted within the anatomical context provided by the pre-operative models. The virtual models assist the user with tool-to-target navigation, while real-time ultrasound ensures accurate positioning of the tool on target, providing the surgeon with sufficient information to "see" and manipulate instruments in the absence of direct vision.
Several pre-clinical acute evaluation studies were conducted in vivo in swine models to assess the feasibility of the proposed environment in a clinical context. Following direct access inside the beating heart using the UCI, the proposed mixed reality environment provided the visualization and navigation needed to position a prosthetic mitral valve on the native annulus, or to place a repair patch over a surgically created septal defect.
Following further development and seamless integration into the clinical workflow, we hope that the proposed mixed reality guidance environment may become a significant milestone toward enabling minimally invasive therapy on the beating heart.
Learning Feature Descriptors for Pre- and Intra-operative Point Cloud Matching for Laparoscopic Liver Registration
Purpose: In laparoscopic liver surgery (LLS), pre-operative information can
be overlaid onto the intra-operative scene by registering a 3D pre-operative
model to the intra-operative partial surface reconstructed from the
laparoscopic video. To assist with this task, we explore the use of
learning-based feature descriptors, which, to the best of our knowledge, have
not previously been applied to laparoscopic liver registration. Furthermore,
no dataset exists to train and evaluate such learning-based descriptors.
Methods: We present the LiverMatch dataset, consisting of 16 pre-operative
models and their simulated intra-operative 3D surfaces. We also propose the
LiverMatch network designed for this task, which outputs per-point feature
descriptors, visibility scores, and matched points.
Results: We compare the proposed LiverMatch network with the network closest
to LiverMatch and with a histogram-based 3D descriptor on the testing split of
the
LiverMatch dataset, which includes two unseen pre-operative models and 1400
intra-operative surfaces. Results suggest that our LiverMatch network can
predict more accurate and dense matches than the other two methods and can be
seamlessly integrated with a RANSAC-ICP-based registration algorithm to achieve
an accurate initial alignment.
Conclusion: The use of learning-based feature descriptors in LLR is
promising, as it can help achieve an accurate initial rigid alignment, which,
in turn, serves as an initialization for subsequent non-rigid registration. We
will release the dataset and code upon acceptance.
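The RANSAC-ICP initialization the abstract refers to can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes putative point matches (such as those produced by a matching network) are already available, estimates a rigid transform from random 3-point samples via the Kabsch algorithm, and refines the best hypothesis with point-to-point ICP:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rigid transform (R, t) mapping points P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_align(src, dst, n_iter=200, thresh=0.05, rng=None):
    """RANSAC over putative matches src[i] <-> dst[i]; returns best (R, t)."""
    rng = rng or np.random.default_rng(0)
    best_R, best_t, best_in = np.eye(3), np.zeros(3), -1
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)  # minimal 3-point sample
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        n_in = int((err < thresh).sum())              # count inlier matches
        if n_in > best_in:
            best_in, best_R, best_t = n_in, R, t
    return best_R, best_t

def icp_refine(src, dst, R, t, n_iter=20):
    """Point-to-point ICP refinement starting from an initial (R, t)."""
    for _ in range(n_iter):
        moved = src @ R.T + t
        # nearest neighbour in dst for each transformed source point
        nn = np.argmin(((moved[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = kabsch(src, dst[nn])
    return R, t

# Toy check: recover a known rigid transform from noiseless matches.
rng = np.random.default_rng(1)
src = rng.standard_normal((200, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.3, -0.2, 0.1])
dst = src @ R_true.T + t_true
R, t = icp_refine(src, dst, *ransac_align(src, dst))
print(np.allclose(R, R_true, atol=1e-6))  # True
```

In practice the rigid result would serve only as the initialization for the subsequent non-rigid registration mentioned in the conclusion; real intra-operative surfaces are partial and noisy, so the inlier threshold and iteration counts would need tuning.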
Applied Sciences—Special Issue on Emerging Techniques in Imaging, Modelling and Visualization for Cardiovascular Diagnosis and Therapy
Ongoing developments in computing and data acquisition, along with continuous advances in medical imaging technology, computational modelling, robotics and visualization have revolutionized many medical specialties and, in particular, diagnostic and interventional cardiology [...]
Tips on effective presentation design and delivery
For many of us, oral presentations are the prime means of communicating our ideas and our research, not only to our peers, but also to our employers and to potential customers. As students, you are no exception: the prospect of an oral presentation can be daunting, and the pressure is on to make a good impression with your research. That we are scientists presenting sometimes very complicated scientific ideas and results need not be a recipe for a sleep-inducing "death by PowerPoint" presentation; rather, there are simple ways in which we can all make our presentations effective and captivating. This session aims to give you some all-round pointers on preparing and delivering an effective presentation that conveys your ideas smoothly, understandably and, most importantly, succinctly.
Image-guided interventions and computer-integrated therapy: Quo vadis?
Significant efforts have been dedicated to minimizing the invasiveness of surgical interventions, most of which have been possible thanks to developments in medical imaging, surgical navigation, visualization and display technologies. Image-guided interventions have promised to dramatically change the way therapies are delivered to many organs. However, in spite of the development of many sophisticated technologies over the past two decades, beyond some isolated examples of successful implementation, minimally invasive therapy is far from enjoying the wide acceptance once envisioned. This paper provides a large-scale overview of state-of-the-art developments, identifies several barriers thought to have hampered the wider adoption of image-guided navigation, and suggests areas of research that may advance the field.
Learning Deep Representations of Cardiac Structures for 4D Cine MRI Image Segmentation through Semi-Supervised Learning
Learning good data representations for medical imaging tasks ensures the preservation of relevant information and the removal of irrelevant information from the data to improve the interpretability of the learned features. In this paper, we propose a semi-supervised model—namely, combine-all in semi-supervised learning (CqSL)—to demonstrate the power of a simple combination of a disentanglement block, variational autoencoder (VAE), generative adversarial network (GAN), and a conditioning layer-based reconstructor for performing two important tasks in medical imaging: segmentation and reconstruction. Our work is motivated by the recent progress in image segmentation using semi-supervised learning (SSL), which has shown good results with limited labeled data and large amounts of unlabeled data. A disentanglement block decomposes an input image into a domain-invariant spatial factor and a domain-specific non-spatial factor. We assume that medical images acquired using multiple scanners (different domain information) share a common spatial space but differ in non-spatial space (intensities, contrast, etc.). Hence, we utilize our spatial information to generate segmentation masks from unlabeled datasets using a generative adversarial network (GAN). Finally, to reconstruct the original image, our conditioning layer-based reconstruction block recombines spatial information with random non-spatial information sampled from the generative models. Our ablation study demonstrates the benefits of disentanglement in holding domain-invariant (spatial) as well as domain-specific (non-spatial) information with high accuracy. We further apply a structured L2 similarity (SL2SIM) loss along with a mutual information minimizer (MIM) to improve the adversarially trained generative models for better reconstruction. 
Experimental results on the STACOM 2017 ACDC cine cardiac magnetic resonance (MR) dataset suggest that our proposed CqSL model outperforms fully supervised and semi-supervised models, achieving 83.2% accuracy even when using only 1% labeled data. We hypothesize that the proposed model has the potential to become an efficient semantic segmentation tool for domain adaptation in data-limited medical imaging scenarios, where annotations are expensive. Code and experimental configurations will be made publicly available.
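The abstract does not state which overlap metric underlies the reported figure; for cardiac MR segmentation benchmarks such as ACDC, the Dice coefficient is the customary choice. A minimal sketch of how it is computed on binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 6x6 square masks on a 10x10 grid, sharing a 4x4 region.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True    # 36 pixels
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True  # 36 pixels, 16 shared
print(round(dice_score(a, b), 3))  # 2*16 / (36+36) -> 0.444
```

For multi-class cardiac structures (e.g. left ventricle, right ventricle, myocardium), the score is typically computed per class and averaged.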